Improving the runtime performance of deep neural networks (DNNs) is critical given their wide adoption in the real world. Existing approaches to optimizing the tensor algebra expressions of DNNs only consider expressions representable by a fixed set of predefined operators, missing possible optimization opportunities between general expressions. We propose Ollie, the first derivation-based tensor program optimizer. Ollie optimizes tensor programs by leveraging transformations between general tensor algebra expressions, enabling a significantly larger expression search space that includes the spaces supported by prior work as special cases. Ollie uses a hybrid derivation-based optimizer that effectively combines explorative and guided derivations to quickly discover highly optimized expressions. An evaluation on seven DNNs shows that Ollie can outperform existing optimizers by up to $2.73\times$ ($1.46\times$ on average) on an A100 GPU and by up to $2.68\times$ ($1.51\times$ on average) on a V100 GPU.
translated by Google Translate
Image BERT pre-training with masked image modeling (MIM) has become a popular practice for self-supervised representation learning. A pioneering work casts MIM as a classification task with a visual vocabulary, tokenizing the continuous visual signals into discrete vision tokens using a pre-learned dVAE. Despite being a feasible solution, the improper discretization hinders further improvement of image pre-training. Since image discretization has no ground-truth answer, we argue that masked patches should not be assigned a unique token id even if a better ``tokenizer'' can be obtained. In this work, we introduce an improved BERT-style image pre-training method, namely mc-BEiT, which performs MIM proxy tasks towards eased and refined multi-choice training objectives. Specifically, the multi-choice supervision for the masked image patches is formed by the soft probability vectors of the discrete token ids, which are predicted by the off-the-shelf image ``tokenizer'' and further refined by high-level inter-patch perceptions, resorting to the observation that similar patches should share their choices. Extensive experiments on classification, segmentation, and detection tasks demonstrate the superiority of our method; e.g., the pre-trained ViT-B achieves 84.1% top-1 fine-tuning accuracy on ImageNet-1K classification, 49.2% AP^b and 44.0% AP^m for object detection and instance segmentation on COCO, and 50.8% on ADE20K semantic segmentation, outperforming competitive counterparts. The code will be available at https://github.com/lixiaotong97/mc-beit.
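The contrast between single-choice and multi-choice supervision can be illustrated with a toy computation. This is a minimal sketch of my reading of the idea, not the paper's implementation: the target for a masked patch is a soft probability vector over a (here, tiny and hypothetical) visual vocabulary rather than a one-hot token id, and training minimizes cross-entropy against that soft target.

```python
import numpy as np

# Toy comparison of one-hot ("single-choice") vs. soft ("multi-choice")
# MIM targets. Vocabulary size and all logits are illustrative.

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

vocab = 8                                              # toy visual vocabulary
logits = np.array([[2.0, 0.5, 0.1, 0, 0, 0, 0, 0]])    # model's prediction
tokenizer_logits = np.array([[1.8, 1.5, 0, 0, 0, 0, 0, 0]])  # two similar tokens

soft_target = softmax(tokenizer_logits)   # multi-choice: mass on several ids
hard_target = np.eye(vocab)[[0]]          # single-choice: one unique token id

log_p = np.log(softmax(logits))
loss_soft = -(soft_target * log_p).sum(axis=-1).mean()
loss_hard = -(hard_target * log_p).sum(axis=-1).mean()
```

The soft target spreads probability over visually similar token ids instead of forcing one answer, which is the relaxation the abstract describes.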
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks. Although some CL techniques have been proposed for document sentiment classification, we are not aware of any CL work on ASC. A CL system that incrementally learns a sequence of ASC tasks should address the following two issues: (1) transfer the knowledge learned from previous tasks to the new task to help it learn a better model, and (2) maintain the performance of the models for previous tasks so that they are not forgotten. This paper proposes a novel capsule network based model called B-CL to address these issues. B-CL markedly improves the ASC performance on both the new task and the old tasks via forward and backward knowledge transfer. The effectiveness of B-CL is demonstrated through extensive experiments.
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks in a particular CL setting called domain incremental learning (DIL). Each task is from a different domain or product. The DIL setting is particularly suited to ASC because in testing the system does not need to know the task/domain to which the test data belongs. To our knowledge, this setting has not been studied before for ASC. This paper proposes a novel model called CLASSIC. The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task ids in testing. Experimental results show the high effectiveness of CLASSIC.
Continual learning (CL) incrementally learns a sequence of tasks with the goal of achieving two main objectives: overcoming catastrophic forgetting (CF) and encouraging knowledge transfer (KT) across tasks. However, most existing techniques focus only on overcoming CF and have no mechanism to encourage KT, and thus do not do well in KT. Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks do not have much shared knowledge. Another observation is that most current CL methods do not use pre-trained models, although it has been shown that such models can significantly improve end-task performance. For example, in natural language processing, fine-tuning a BERT-like pre-trained language model is one of the most effective approaches. However, for CL, this approach suffers from serious CF. An interesting question is how to make the best use of pre-trained models for CL. This paper proposes a novel model called CTR to solve these problems. Our experimental results demonstrate the effectiveness of CTR.
We study the composition style in deep image matting, a notion that characterizes a data generation flow on how to exploit limited foregrounds and random backgrounds to form a training dataset. Prior art executes this flow in a completely random manner by simply going through the foreground pool or by optionally combining two foregrounds before foreground-background composition. In this work, we first show that naive foreground combination can be problematic and therefore derive an alternative formulation to reasonably combine foregrounds. Our second contribution is an observation that matting performance can benefit from a certain occurrence frequency of combined foregrounds and their associated source foregrounds during training. Inspired by this, we introduce a novel composition style that binds the source and combined foregrounds in a definite triplet. In addition, we also find that different orders of foreground combination lead to different foreground patterns, which further inspires a quadruplet-based composition style. Results under controlled experiments on four matting baselines show that our composition styles outperform existing ones and invite consistent performance improvement on both composited and real-world datasets. Code is available at: https://github.com/coconuthust/composition_styles
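As a concrete reference point for what "combining two foregrounds before foreground-background composition" involves, here is a hedged sketch using the standard alpha-compositing "over" operator; it is an illustrative formulation with synthetic arrays, not the paper's derived combination rule.

```python
import numpy as np

# Illustrative sketch (not the paper's exact formulation): composite two
# matting foregrounds with the "over" operator, then place the combined
# foreground over a random background to form a training image.

def over(fg_a, alpha_a, fg_b, alpha_b):
    """Composite (fg_a, alpha_a) over (fg_b, alpha_b)."""
    alpha = alpha_a + alpha_b * (1.0 - alpha_a)
    safe = np.where(alpha > 0, alpha, 1.0)  # avoid division by zero
    fg = (fg_a * alpha_a + fg_b * alpha_b * (1.0 - alpha_a)) / safe
    return fg, alpha

rng = np.random.default_rng(0)
h, w = 4, 4  # toy resolution
fg1, fg2 = rng.random((h, w, 3)), rng.random((h, w, 3))
a1, a2 = rng.random((h, w, 1)), rng.random((h, w, 1))

fg, alpha = over(fg1, a1, fg2, a2)
bg = rng.random((h, w, 3))
image = fg * alpha + bg * (1.0 - alpha)  # final composite fed to training
```

The combined alpha can only grow relative to the front foreground's alpha, which hints at why naive combination can distort foreground statistics, as the abstract argues.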
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Script is a kind of structured knowledge extracted from texts, which contains a sequence of events. Based on such knowledge, script event prediction aims to predict the subsequent event. To do so, two aspects should be considered for events, namely, event description (i.e., what the events should contain) and event encoding (i.e., how they should be encoded). Most existing methods describe an event by a verb together with only a few core arguments (i.e., subject, object, and indirect object), which are not precise. In addition, existing event encoders are limited to a fixed number of arguments, which are not flexible to deal with extra information. Thus, in this paper, we propose the Rich Event Prediction (REP) framework for script event prediction. Fundamentally, it is based on the proposed rich event description, which enriches the existing ones with three kinds of important information, namely, the senses of verbs, extra semantic roles, and types of participants. REP contains an event extractor to extract such information from texts. Based on the extracted rich information, a predictor then selects the most probable subsequent event. The core component of the predictor is a transformer-based event encoder to flexibly deal with an arbitrary number of arguments. Experimental results on the widely used Gigaword Corpus show the effectiveness of the proposed framework.
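The key flexibility claimed for the event encoder is handling an arbitrary number of arguments. A plain single-head self-attention layer already has that property, as this toy sketch shows; the layer, dimensions, and pooling are illustrative stand-ins, not REP's actual architecture.

```python
import numpy as np

# Sketch: self-attention encodes an event from a variable-size set of
# argument embeddings into a fixed-size vector, unlike fixed-arity encoders.

def self_attention(X):
    """Plain single-head self-attention over an (n_args, d) matrix."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X

rng = np.random.default_rng(0)
d = 16  # toy embedding dimension
# Two events with different argument counts both encode to a (d,) vector.
event_small = self_attention(rng.random((3, d))).mean(axis=0)
event_large = self_attention(rng.random((7, d))).mean(axis=0)
```

Extra semantic roles or participant types can simply be appended as additional rows, which is why a transformer-style encoder accommodates the enriched event description.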
A hallmark of the deep learning era for computer vision is the successful use of large-scale labeled datasets to train feature representations for tasks ranging from object recognition and semantic segmentation to optical flow estimation and novel view synthesis of 3D scenes. In this work, we aim to learn dense discriminative object representations for low-shot category recognition without requiring any category labels. To this end, we propose Deep Object Patch Encodings (DOPE), which can be trained from multiple views of object instances without any category or semantic object part labels. To train DOPE, we assume access to sparse depths, foreground masks and known cameras, to obtain pixel-level correspondences between views of an object, and use this to formulate a self-supervised learning task to learn discriminative object patches. We find that DOPE can directly be used for low-shot classification of novel categories using local-part matching, and is competitive with and outperforms supervised and self-supervised learning baselines. Code and data available at https://github.com/rehg-lab/dope_selfsup.
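The correspondence step the training task relies on — depth plus known cameras yielding pixel-level matches between views — is standard multi-view geometry. Below is a hedged sketch with made-up intrinsics and pose, not the paper's data: a pixel is unprojected to 3D in one camera and reprojected into the other.

```python
import numpy as np

# Sketch of depth-based cross-view correspondence. K, R, t are illustrative.

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # shared intrinsics

def unproject(uv, depth, K):
    """Pixel (u, v) with metric depth -> 3D point in camera coordinates."""
    u, v = uv
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def project(p_cam, K):
    """3D point in camera coordinates -> pixel (u, v)."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Relative pose of camera 2 w.r.t. camera 1: small translation along x.
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])

uv1, depth1 = (400.0, 300.0), 2.0
p1 = unproject(uv1, depth1, K)   # 3D point in the camera-1 frame
p2 = R @ p1 + t                  # same point in the camera-2 frame
uv2 = project(p2, K)             # corresponding pixel in view 2
```

Such pairs of pixels (`uv1`, `uv2`) observing the same surface point are what a self-supervised patch-discrimination objective can treat as positives.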
Continual learning (CL) learns a sequence of tasks incrementally. There are two popular CL settings, class incremental learning (CIL) and task incremental learning (TIL). A major challenge of CL is catastrophic forgetting (CF). While a number of techniques are already available to effectively overcome CF for TIL, CIL remains to be highly challenging. So far, little theoretical study has been done to provide a principled guidance on how to solve the CIL problem. This paper performs such a study. It first shows that probabilistically, the CIL problem can be decomposed into two sub-problems: Within-task Prediction (WP) and Task-id Prediction (TP). It further proves that TP is correlated with out-of-distribution (OOD) detection, which connects CIL and OOD detection. The key conclusion of this study is that regardless of whether WP and TP or OOD detection are defined explicitly or implicitly by a CIL algorithm, good WP and good TP or OOD detection are necessary and sufficient for good CIL performances. Additionally, TIL is simply WP. Based on the theoretical result, new CIL methods are also designed, which outperform strong baselines in both CIL and TIL settings by a large margin.
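The WP/TP decomposition has a direct numerical reading: the probability of a class from task k factors into a within-task posterior times a task posterior. This toy sketch (with random placeholder distributions, not any trained model) illustrates the factorization.

```python
import numpy as np

# Toy illustration of the CIL decomposition: P(class j of task k | x)
# = P(class j | x, task k) * P(task k | x), i.e. WP times TP.

rng = np.random.default_rng(0)
num_tasks, classes_per_task = 3, 4

# WP: per-task class posteriors P(y | x, task k), one row per task.
wp = rng.dirichlet(np.ones(classes_per_task), size=num_tasks)

# TP: task posterior P(task k | x) (in practice related to OOD detection).
tp = rng.dirichlet(np.ones(num_tasks))

# CIL posterior over all num_tasks * classes_per_task classes.
cil = (wp * tp[:, None]).ravel()

predicted_class = int(cil.argmax())
predicted_task = predicted_class // classes_per_task
```

Since the product over all classes still sums to one, good WP and good TP jointly pin down the CIL posterior, which matches the paper's necessity-and-sufficiency claim; TIL, by contrast, only needs the `wp` row for the given task id.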